# A Frugal Self‑Driving Lab on Opentrons OT‑2: Closed‑Loop Color Matching and Acid‑Base “Battleship”

## Jon Potter¹, Yixuan Yin¹, Zhishan Liu¹, Shichen Liu¹, John Kitchin¹, Stefan Bernhard¹, Joshua D. Kangas¹

## (¹ Carnegie Mellon University)

**Purpose** — We introduce a low‑cost, modular self‑driving lab (SDL) built on an Opentrons OT‑2 that teaches and prototypes closed‑loop experimentation. The system supports two applications: (1) dye color matching and (2) an acid‑base “Battleship” game with algorithmic players. This work has not been the subject of a podium presentation and includes new results from a 2025 deployment for ~100 high school students.

**Experimental procedures** —

- *Hardware:* OT‑2 liquid handler; 96‑well plates; six fixed, downward‑facing ~$70 cameras on 3D‑printed mounts; white index‑card background.
- *Software:* a persistent OT‑2 protocol polls a local “.jsonx” file every 5 s; a server writes action lists and reads logs, avoiding the repeated 6‑min reboots typical of per‑run uploads on Opentrons systems (see the first sketch below).
- *Color sensing:* one‑time manual well‑center calibration; radial Gaussian pixel sampling per well; online clustering with a 10‑RGB‑unit threshold, reporting the centroid of the largest cluster; plate‑wide lighting is normalized by empty‑well baselining (sketched below).
- *Modeling/active learning:* three Gaussian‑process models (one per RGB channel) with a Constant×RBF plus White kernel. Recipe volumes obey a simplex constraint (200 μL total). Next experiments are chosen by minimizing predicted color distance with an uncertainty bonus, optimized with multi‑start trust‑constr (sketched below).
- *Activities:* (i) color matching with both three food dyes and acid/base/pH‑indicator mixtures, under an 11‑guess limit per row; (ii) Battleship with a water “ocean,” acid “ships,” and pH‑indicator “missiles,” with hits and misses classified by camera; teams extend a Python base class (“ShotSelection”) to build strategies (sketched below).

**Summary of data** — Color matching: across ≥500 wells, the camera pipeline produced consistent RGB reads despite glare, refraction, and meniscus artifacts. In round 1 of dye matching, the majority of 12 student groups outperformed the AI; parity emerged in round 2; in rounds 3–4 the GP‑driven AI required fewer guesses to reach a ≤20‑RGB‑unit match threshold. Although its poor round‑1 start kept its overall mean guess count above the students’, the median favored the AI. Battleship: the tournament delivered fully automated scoring and a complete single‑elimination bracket; student bots improved markedly over the baseline heuristic.

**Conclusion** — A frugal SDL can deliver reliable closed‑loop experimentation on an OT‑2 without expensive optics or vendor‑locked workflows. It generalizes across assay‑like tasks such as quantitative color and titration‑style detection, and it proved a powerful, tangible mechanism for teaching active‑learning models. Open‑source code and CAD are provided.

**Next steps and future experiments** — Improve resilience to ambient‑light changes; evaluate alternative chemical color gradients; compare acquisition functions and optimizers.

**Travel/approvals** — No travel restrictions anticipated; the presenter will secure employer and institutional approvals prior to SLAS2026.
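
The persistent‑protocol pattern under *Software* replaces per‑run uploads with a file poll. Below is a minimal sketch of that pattern, assuming hypothetical file paths and a simplified action schema (`op`, `volume_ul`, `source`, `dest`); labware choices are illustrative and the released open‑source code may differ.

```python
import json
from pathlib import Path

ACTIONS = Path("/data/actions.jsonx")  # hypothetical path written by the server
LOG = Path("/data/log.json")           # hypothetical log read back by the server
POLL_S = 5                             # matches the 5 s polling interval above

def run(protocol):
    """Persistent OT-2 protocol: poll an action file instead of re-uploading."""
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", 1)
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", 2)
    pipette = protocol.load_instrument("p300_single_gen2", "right", tip_racks=[tips])
    executed = 0  # number of actions already carried out
    while True:
        protocol.delay(seconds=POLL_S)
        if not ACTIONS.exists():
            continue
        actions = json.loads(ACTIONS.read_text())
        for act in actions[executed:]:  # run only actions not yet seen
            if act["op"] == "transfer":
                pipette.transfer(act["volume_ul"],
                                 plate[act["source"]],
                                 plate[act["dest"]])
            elif act["op"] == "stop":
                LOG.write_text(json.dumps({"done": executed}))
                return
            executed += 1
            LOG.write_text(json.dumps({"executed": executed}))
```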
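
A minimal NumPy sketch of the color‑sensing step: radially Gaussian pixel samples around each calibrated well center, an online clustering pass with the 10‑RGB‑unit threshold, and the centroid of the largest cluster as the reported color. The sample count and spread (`n`, `sigma`) are illustrative assumptions.

```python
import numpy as np

def sample_well_rgb(image, cx, cy, sigma=6.0, n=200, thresh=10.0, rng=None):
    """Estimate a well's RGB via Gaussian pixel sampling + online clustering.

    image: HxWx3 uint8 array; (cx, cy): calibrated well center in pixels.
    Reporting the largest cluster's centroid suppresses glare and meniscus
    pixels, which tend to land in smaller clusters.
    """
    rng = rng or np.random.default_rng()
    # Radially Gaussian offsets around the calibrated center.
    xs = np.clip(rng.normal(cx, sigma, n).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(rng.normal(cy, sigma, n).astype(int), 0, image.shape[0] - 1)
    pixels = image[ys, xs].astype(float)

    clusters = []  # each entry: [running RGB sum (3,), member count]
    for p in pixels:
        for c in clusters:
            centroid = c[0] / c[1]
            if np.linalg.norm(p - centroid) < thresh:  # 10-RGB-unit threshold
                c[0] += p
                c[1] += 1
                break
        else:
            clusters.append([p.copy(), 1])  # no match: start a new cluster
    largest = max(clusters, key=lambda c: c[1])
    return largest[0] / largest[1]  # centroid of the largest cluster
```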
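
A sketch of the modeling/active‑learning loop with scikit‑learn and SciPy, under the stated design: one GP per RGB channel with a Constant×RBF plus White kernel, a 200 μL simplex constraint, and multi‑start trust‑constr over a color‑distance‑minus‑uncertainty acquisition. The exploration weight `kappa`, the RBF length scale, and the start count are assumptions, not values from the deployed system.

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

def fit_channel_gps(X, Y):
    """One GP per RGB channel. X: dye recipes (n, 3) in uL; Y: measured RGB (n, 3)."""
    kernel = ConstantKernel() * RBF(length_scale=50.0) + WhiteKernel()
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, Y[:, c])
            for c in range(3)]

def next_recipe(gps, target_rgb, total_ul=200.0, kappa=1.0, n_starts=20, rng=None):
    """Pick the next recipe: predicted color distance minus an uncertainty bonus,
    minimized on the simplex (volumes sum to total_ul) via multi-start trust-constr."""
    rng = rng or np.random.default_rng()

    def acquisition(x):
        preds = [gp.predict(x[None, :], return_std=True) for gp in gps]
        mu = np.array([m[0][0] for m in preds])
        sig = np.array([m[1][0] for m in preds])
        return np.linalg.norm(mu - target_rgb) - kappa * sig.sum()

    simplex = LinearConstraint(np.ones((1, 3)), total_ul, total_ul)  # sum = 200 uL
    bounds = [(0.0, total_ul)] * 3
    best = None
    for _ in range(n_starts):  # multi-start to escape local minima
        x0 = rng.dirichlet(np.ones(3)) * total_ul  # random point on the simplex
        res = minimize(acquisition, x0, method="trust-constr",
                       bounds=bounds, constraints=[simplex])
        if best is None or res.fun < best.fun:
            best = res
    return best.x
```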
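
Only the base‑class name `ShotSelection` is given above; the interface below (`next_shot`, `report`) and the example hunt‑and‑target strategy are hypothetical, sketched to show how a student team might extend the class.

```python
import random

class ShotSelection:
    """Base class teams extend; interface details here are illustrative."""

    def __init__(self, rows=8, cols=12):  # a 96-well "ocean"
        self.rows, self.cols = rows, cols
        self.untried = [(r, c) for r in range(rows) for c in range(cols)]

    def next_shot(self):
        """Return the (row, col) well to fire a pH-indicator 'missile' into."""
        raise NotImplementedError

    def report(self, shot, hit):
        """Called with the camera-classified result of the last shot."""
        self.untried.remove(shot)

class HuntAndTarget(ShotSelection):
    """Example strategy: random hunt, then target neighbors of known hits."""

    def __init__(self, **kw):
        super().__init__(**kw)
        self.targets = []  # neighbors of hits, tried before random hunting

    def next_shot(self):
        while self.targets:
            shot = self.targets.pop()
            if shot in self.untried:
                return shot
        return random.choice(self.untried)

    def report(self, shot, hit):
        super().report(shot, hit)
        if hit:
            r, c = shot
            self.targets += [(r + dr, c + dc)
                             for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                             if 0 <= r + dr < self.rows and 0 <= c + dc < self.cols]
```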